Regularization vs. Relaxation: A conic optimization perspective of statistical variable selection

Authors

  • Hongbo Dong
  • Kun Chen
  • Jeff T. Linderoth
Abstract

Variable selection is a fundamental task in statistical data analysis. Sparsity-inducing regularization methods are a popular class of methods that simultaneously perform variable selection and model estimation. The central problem is a quadratic optimization problem with an ℓ0-norm penalty. Exactly enforcing the ℓ0-norm penalty is computationally intractable for larger-scale problems, so different sparsity-inducing penalty functions that approximate the ℓ0-norm have been introduced. In this paper, we show that viewing the problem from a convex-relaxation perspective offers new insights. In particular, we show that a popular sparsity-inducing concave penalty function known as the Minimax Concave Penalty (MCP), and the reverse Huber penalty derived in a recent work by Pilanci, Wainwright and El Ghaoui, can both be derived as special cases of a lifted convex relaxation called the perspective relaxation. The optimal perspective relaxation is a related minimax problem that balances the overall convexity and the tightness of the approximation to the ℓ0-norm; we show that it can be solved by a semidefinite relaxation. Moreover, a probabilistic interpretation of the semidefinite relaxation reveals connections with the Boolean quadric polytope in combinatorial optimization. Finally, by reformulating the ℓ0-norm penalized problem as a two-level problem, with the inner level being a Max-Cut problem, our proposed semidefinite relaxation can be realized by replacing the inner-level problem with its semidefinite relaxation studied by Goemans and Williamson. This interpretation suggests using the Goemans-Williamson rounding procedure to find approximate solutions to the ℓ0-norm penalized problem. Numerical experiments demonstrate the tightness of our proposed semidefinite relaxation and the effectiveness of finding approximate solutions by Goemans-Williamson rounding.
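For concreteness, the central objects named in the abstract can be written out. The following are standard formulations from the variable-selection literature (the ℓ0-penalized least-squares problem and Zhang's MCP), reproduced here as background rather than quoted from the paper:

    % l0-penalized least squares
    \min_{\beta \in \mathbb{R}^p} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\beta\|_0

    % Minimax Concave Penalty (applied coordinate-wise)
    p_\gamma(t;\lambda) = \lambda \int_0^{|t|} \max\!\big(0,\, 1 - s/(\gamma\lambda)\big)\, ds
      = \begin{cases} \lambda|t| - t^2/(2\gamma), & |t| \le \gamma\lambda, \\ \gamma\lambda^2/2, & |t| > \gamma\lambda. \end{cases}

The abstract's final step, Goemans-Williamson rounding, can be sketched in a few lines of Python. This is a minimal illustration, not the paper's algorithm: it assumes a PSD matrix X_sdp has already been computed by some SDP relaxation whose +/-1 variables encode which coefficients are active, with index 0 acting as a reference ("off") vertex; that encoding, and the helper name gw_round_support, are assumptions made for illustration.

    import numpy as np

    def gw_round_support(X_sdp, X_design, y, lam, n_samples=50, seed=None):
        """Hyperplane rounding in the style of Goemans-Williamson.

        X_sdp : (p+1) x (p+1) PSD matrix from an SDP relaxation
                (assumed encoding: entry 0 is a reference vertex,
                entries 1..p correspond to the p coefficients).
        Returns the best support found, scored by the l0-penalized
        least-squares objective 0.5*||y - A b||^2 + lam*|support|.
        """
        rng = np.random.default_rng(seed)
        p = X_design.shape[1]
        # Factor X_sdp = V^T V via an eigendecomposition (jitter for safety).
        w, U = np.linalg.eigh(X_sdp + 1e-9 * np.eye(p + 1))
        V = (U * np.sqrt(np.clip(w, 0.0, None))).T

        best_obj, best_support = np.inf, np.zeros(p, dtype=bool)
        for _ in range(n_samples):
            r = rng.standard_normal(p + 1)     # random hyperplane normal
            signs = np.sign(V.T @ r)           # round each vector to +/-1
            support = signs[1:] != signs[0]    # "on" = cut from reference vertex
            if support.any():
                A = X_design[:, support]
                b, *_ = np.linalg.lstsq(A, y, rcond=None)  # refit on support
                obj = 0.5 * np.sum((y - A @ b) ** 2) + lam * support.sum()
            else:
                obj = 0.5 * float(y @ y)       # empty model
            if obj < best_obj:
                best_obj, best_support = obj, support
        return best_support, best_obj

After rounding, refitting an unpenalized least-squares model on each candidate support (as above) is a common way to score supports; whether the paper scores candidates this way is not stated in the abstract.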


Similar Papers

A New Perspective on Convex Relaxations of Sparse SVM

This paper proposes a convex relaxation of a sparse support vector machine (SVM) based on the perspective relaxation of mixed-integer nonlinear programs. We seek to minimize the zero-norm of the hyperplane normal vector with a standard SVM hinge-loss penalty and extend our approach to a zero-one loss penalty. The relaxation that we propose is a second-order cone formulation that can be efficien...
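In symbols, one plausible rendering of the objective this snippet describes is the following (an illustrative form; the paper's exact weighting and constraints may differ):

    \min_{w,\, b} \; \|w\|_0 + C \sum_{i=1}^n \max\{0,\; 1 - y_i (w^\top x_i + b)\}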


Local Loss Optimization in Operator Models: A New Insight into Spectral Learning

This paper revisits the spectral method for learning latent variable models defined in terms of observable operators. We give a new perspective on the method, showing that operators can be recovered by minimizing a loss defined on a finite subset of the domain. This leads to the derivation of a non-convex optimization problem similar to the spectral method. We also propose a regularized convex relaxatio...


Variable Selection via A Combination of the L0 and L1 Penalties

Variable selection is an important aspect of high-dimensional statistical modelling, particularly in regression and classification. In the regularization framework, various penalty functions are used to perform variable selection by putting relatively large penalties on small coefficients. The L1 penalty is a popular choice because of its convexity, but it produces biased estimates for the larg...
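An illustrative way to write the combined penalty referred to in the title (not quoted from the paper) is

    \min_{\beta} \; \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_0 \|\beta\|_0,

with λ1 providing convex shrinkage and λ0 directly charging for each nonzero coefficient.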


Regularization Methods for Sum of Squares Relaxations in Large Scale Polynomial Optimization

We study how to solve sum of squares (SOS) and Lasserre's relaxations for large-scale polynomial optimization. When interior-point methods are used, typically only small or moderately large problems can be solved. This paper proposes regularization-type methods that can solve significantly larger problems. We first describe these methods for general conic semidefinite optimization...


Boosting Algorithms: Regularization, Prediction and Model Fitting

We present a statistical perspective on boosting. Special emphasis is given to estimating potentially complex parametric or nonparametric models, including generalized linear and additive models as well as regression models for survival analysis. Concepts of degrees of freedom and corresponding Akaike or Bayesian information criteria, particularly useful for regularization and variable selectio...



Journal:
  • CoRR

Volume: abs/1510.06083 · Issue: –

Pages: –

Publication date: 2015